Inter-modality Face Recognition

Authors

  • Dahua Lin
  • Xiaoou Tang
Abstract

Recently, the wide deployment of practical face recognition systems has given rise to the inter-modality face recognition problem, in which the face images in the database and the query images captured on the spot are acquired under quite different conditions or even with different equipment. Conventional approaches either treat the samples with a uniform model or introduce an intermediate conversion stage, both of which lead to severe performance degradation due to the great discrepancies between modalities. In this paper, we propose a novel algorithm called Common Discriminant Feature Extraction, specially tailored to the inter-modality problem. In the algorithm, two transforms are simultaneously learned that map the samples of the two modalities, respectively, into a common feature space. We formulate the learning objective by incorporating both the empirical discriminative power and the local smoothness of the feature transformation. By explicitly controlling the model complexity through the smoothness constraint, we can effectively reduce the risk of overfitting and enhance the generalization capability. Furthermore, to cope with the non-Gaussian distribution and diverse variations in the sample space, we develop two nonlinear extensions of the algorithm: one based on kernelization, the other a multi-mode framework. These extensions substantially improve the recognition performance in complex situations. Extensive experiments test our algorithms in two application scenarios: optical image-infrared image recognition and photo-sketch recognition. Our algorithms show excellent performance in the experiments.
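The paper's central idea — two modality-specific transforms learned jointly so that both modalities land in one common feature space — can be sketched with a ridge-regularized CCA-style baseline. This is a simplification, not the authors' method: the actual CDFE objective also incorporates empirical discriminative power and a local smoothness constraint, and the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired data: row i of X1 (e.g. a photo) and row i of X2
# (e.g. a sketch) depict the same subject through a shared latent identity.
n, d1, d2, k = 20, 8, 6, 3
Z = rng.normal(size=(n, k))                      # latent identity factors
X1 = Z @ rng.normal(size=(k, d1)) + 0.05 * rng.normal(size=(n, d1))
X2 = Z @ rng.normal(size=(k, d2)) + 0.05 * rng.normal(size=(n, d2))

def inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def common_space(X1, X2, k, lam=1e-3):
    """Learn W1 and W2 so that X1 @ W1 and X2 @ W2 are maximally correlated
    (regularized CCA); lam is a complexity penalty, loosely playing the role
    of the smoothness constraint in the paper's objective."""
    X1 = X1 - X1.mean(0)
    X2 = X2 - X2.mean(0)
    n = len(X1)
    C11 = X1.T @ X1 / n + lam * np.eye(X1.shape[1])
    C22 = X2.T @ X2 / n + lam * np.eye(X2.shape[1])
    C12 = X1.T @ X2 / n
    S1, S2 = inv_sqrt(C11), inv_sqrt(C22)
    U, _, Vt = np.linalg.svd(S1 @ C12 @ S2)
    return S1 @ U[:, :k], S2 @ Vt.T[:, :k]

W1, W2 = common_space(X1, X2, k)

# Inter-modality matching: nearest neighbour in the common feature space.
P1 = (X1 - X1.mean(0)) @ W1
P2 = (X2 - X2.mean(0)) @ W2
dists = np.linalg.norm(P1[:, None, :] - P2[None, :, :], axis=-1)
acc = np.mean(dists.argmin(axis=1) == np.arange(n))
print(f"rank-1 matching accuracy on toy data: {acc:.2f}")
```

The kernelized extension mentioned in the abstract would correspond, in this sketch, to replacing the inner products in the covariance matrices with kernel evaluations.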


Similar Articles

Fusion of Intra- and Inter-modality Algorithms for Face-Sketch Recognition

This work combines Eigentransformation, a global intra-modality method [1], with the Eigenpatches local intra-modality approach previously used for face hallucination [2]. In both approaches, photos are transformed into sketches or vice versa to reduce the modality gap, allowing recognition with a traditional face recogniser. These algorithms are further fused with an inter-modality metho...


Hybridization of Facial Features and Use of Multi Modal Information for 3D Face Recognition

Despite achieving good performance in controlled environments, conventional 3D face recognition systems still have difficulty handling large variations in lighting conditions, facial expression, and head pose. Humans use a hybrid approach to recognize faces, and therefore in this proposed method the human face recognition ability is incorporated by combining global and local ...


Video-based face recognition in color space by graph-based discriminant analysis

Video-based face recognition has attracted significant attention in the past decade in many applications such as media technology, network security, human-machine interfaces, and automatic access control systems. The usual approach to face recognition is based on the grayscale image produced by combining the three color-component images. In this work, we consider the grayscale image as well as color s...


An Approach: Modality Reduction and Face

-NN classifier. Face sketch recognition is much harder than normal face recognition based on photo images. To recognize a face sketch against a photo database, it is necessary to reduce the modality difference between face photo and face sketch images. Every face recognition system needs to perform three main subtasks: face detection, feature extraction, and classification. But a face sketch recognition system t...


Introducing the Geneva Multimodal Emotion Portrayal (GEMEP) Corpus

In this chapter we outline the requirements for a systematic corpus of actor portrayals and describe the development, recording, editing, and validating of a major new corpus, the Geneva Multimodal Emotion Portrayal (GEMEP). This corpus consists of more than 7,000 audio-video emotion portrayals, representing 18 emotions (including rarely studied subtle emotions), portrayed by 10 professional ac...



Journal title:

Volume   Issue

Pages  -

Publication date: 2006